In statistics and control theory, Kalman filtering (also known as linear quadratic estimation) is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, to produce estimates of unknown variables that tend to be more accurate than those based on a single measurement, by estimating a joint probability distribution over the variables for each time-step. The filter is constructed as a mean squared error minimiser; an alternative derivation shows how the filter relates to maximum likelihood statistics. The filter is named after Rudolf E. Kálmán.
Kalman filtering has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft, and dynamically positioned ships.
The algorithm works via a two-phase process: a prediction phase and an update phase. In the prediction phase, the Kalman filter produces estimates of the current state variables, including their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight given to estimates with greater certainty. The algorithm is recursive: it can operate in real time, using only the present input measurements, the previously calculated state, and its uncertainty matrix; no additional past information is required.
Optimality of Kalman filtering assumes that errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "The following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear." Regardless of Gaussianity, however, if the process and measurement covariances are known, then the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense, although there may be better nonlinear estimators. It is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian. See Uhlmann and Julier for roughly a dozen instances of this misconception in the literature.
Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter, which work on nonlinear systems. The basis is a hidden Markov model in which the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Kalman filtering has been used successfully in multi-sensor fusion and distributed sensor networks to develop distributed or consensus Kalman filtering.
This digital filter is sometimes termed the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, nonlinear filter developed by the Soviet mathematician Ruslan Stratonovich [Stratonovich, R. L. (1959). Optimum nonlinear systems which bring about a separation of a signal with constant parameters from noise. Radiofizika, 2:6, pp. 892–901; Stratonovich, R. L. (1959). On the theory of optimal non-linear filtering of random functions. Theory of Probability and Its Applications, 4, pp. 223–225; Stratonovich, R. L. (1960). Application of the Markov processes theory to optimal filtering. Radio Engineering and Electronic Physics, 5:11, pp. 1–19; Stratonovich, R. L. (1960). Conditional Markov Processes. Theory of Probability and Its Applications, 5, pp. 156–178]. In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before the summer of 1961, when Kalman met with Stratonovich during a conference in Moscow.
Kalman filtering was first described and partially developed in technical papers by Swerling (1958), Kalman (1960), and Kalman and Bucy (1961).
Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. They are also used in the guidance and navigation systems of reusable launch vehicles and the attitude control and navigation systems of spacecraft which dock at the International Space Station.
Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for, all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as a weighted average of the system's predicted state and of the new measurement. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that the Kalman filter is a recursive filter: it requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.
The relative certainty of the measurements and of the current state estimate is an important consideration, and it is common to discuss the filter's response in terms of the Kalman gain. The Kalman gain is the relative weight given to the measurements versus the current state estimate, and it can be "tuned" to achieve a particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus conforms to them more responsively. With a low gain, the filter conforms to the model predictions more closely. At the extremes, a high gain (close to one) will result in a more jumpy estimated trajectory, while a low gain (close to zero) will smooth out noise but decrease the responsiveness.
When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices because of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.
For this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model). Not only will a new position estimate be calculated, but also a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping.
In most applications, the internal state is much larger (has more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state.
In the Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function, and Kalman filtering is a special case of combining linear belief functions on a join-tree or Markov chain. Additional methods include belief filtering, which uses Bayes or evidential updates to the state equations.
A wide variety of Kalman filters exists by now: Kalman's original formulation, now termed the "simple" Kalman filter; the Kalman–Bucy filter; Schmidt's "extended" filter; the information filter; and a variety of "square-root" filters that were developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment.
In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying, for each time-step k, the state-transition model F_k, the observation model H_k, the covariance Q_k of the process noise, the covariance R_k of the observation noise, and, when present, the control-input model B_k.
As seen below, it is common in many applications that the matrices F, H, Q, R, and B are constant across time, in which case their time index may be dropped.
The Kalman filter model assumes that the true state at time k evolves from the state at time k − 1 according to

\mathbf{x}_k = \mathbf{F}_k \mathbf{x}_{k-1} + \mathbf{B}_k \mathbf{u}_k + \mathbf{w}_k
where F_k is the state-transition model applied to the previous state x_{k−1}; B_k is the control-input model applied to the control vector u_k; and w_k is the process noise, assumed to be drawn from a zero-mean multivariate normal distribution with covariance Q_k: \mathbf{w}_k \sim \mathcal{N}(0, \mathbf{Q}_k).
If Q is independent of time, one may, following Roweis and Ghahramani, write w instead of w_k to emphasize that the noise model has no explicit dependence on time.
At time k an observation (or measurement) z_k of the true state x_k is made according to

\mathbf{z}_k = \mathbf{H}_k \mathbf{x}_k + \mathbf{v}_k
where H_k is the observation model, which maps the true state space into the observed space, and v_k is the observation noise, assumed to be zero-mean Gaussian white noise with covariance R_k: \mathbf{v}_k \sim \mathcal{N}(0, \mathbf{R}_k).
Analogously to the situation for w_k, one may write v instead of v_k if R is independent of time.
The initial state and the noise vectors at each step, {x_0, w_1, ..., w_k, v_1, ..., v_k}, are all assumed to be mutually independent.
Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input, and, therefore, can bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory using robust control.
The state of the filter is represented by two variables: \hat{\mathbf{x}}_{k\mid k}, the a posteriori state estimate at time k given observations up to and including time k; and \mathbf{P}_{k\mid k}, the a posteriori estimate covariance matrix (a measure of the estimated accuracy of the state estimate).
The algorithm structure of the Kalman filter resembles that of the alpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the innovation (the pre-fit residual), i.e. the difference between the current a priori prediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate based on the current observation is termed the a posteriori state estimate.
Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction procedures performed. Likewise, if multiple independent observations are available at the same time, multiple update procedures may be performed (typically with different observation matrices H_k).
Predict:

Predicted (a priori) state estimate: \hat{\mathbf{x}}_{k\mid k-1} = \mathbf{F}_k \hat{\mathbf{x}}_{k-1\mid k-1} + \mathbf{B}_k \mathbf{u}_k
Predicted (a priori) estimate covariance: \mathbf{P}_{k\mid k-1} = \mathbf{F}_k \mathbf{P}_{k-1\mid k-1} \mathbf{F}_k^\textsf{T} + \mathbf{Q}_k

Update:

Innovation or measurement pre-fit residual: \tilde{\mathbf{y}}_k = \mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_{k\mid k-1}
Innovation (or pre-fit residual) covariance: \mathbf{S}_k = \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} + \mathbf{R}_k
Optimal Kalman gain: \mathbf{K}_k = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1}
Updated (a posteriori) state estimate: \hat{\mathbf{x}}_{k\mid k} = \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k \tilde{\mathbf{y}}_k
Updated (a posteriori) estimate covariance: \mathbf{P}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}_{k\mid k-1}
Measurement post-fit residual: \tilde{\mathbf{y}}_{k\mid k} = \mathbf{z}_k - \mathbf{H}_k \hat{\mathbf{x}}_{k\mid k}
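As a minimal sketch of the predict and update equations above (assuming NumPy and generic model matrices; the function name is illustrative, not standard):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R, B=None, u=None):
    """One predict/update cycle of the linear Kalman filter."""
    # Predict: a priori state estimate and covariance
    x_pred = F @ x if B is None else F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    # Update: innovation, its covariance, and the optimal gain
    y = z - H @ x_pred                          # pre-fit residual
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # optimal Kalman gain
    x_new = x_pred + K @ y                      # a posteriori state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P_pred   # a posteriori covariance
    return x_new, P_new
```

Calling this once per measurement implements the recursion: only the previous estimate and covariance are carried forward, as described above.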
The formula for the updated (a posteriori) estimate covariance above is valid for the optimal gain K_k that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in the derivations section, where the formula valid for any K_k is also shown.
A more intuitive way to express the updated state estimate (\hat{\mathbf{x}}_{k\mid k}) is:

\hat{\mathbf{x}}_{k\mid k} = \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \hat{\mathbf{x}}_{k\mid k-1} + \mathbf{K}_k \mathbf{z}_k
This expression is reminiscent of the linear interpolation x = (1 − t)a + tb for t between 0 and 1, with the matrix K_k H_k playing a role analogous to t. If the model is accurate, and the values for \hat{\mathbf{x}}_{0\mid 0} and \mathbf{P}_{0\mid 0} accurately reflect the distribution of the initial state values, then the following invariants are preserved:
\begin{align}\operatorname{E}[\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}] &= \operatorname{E}[\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}] = 0 \\ \operatorname{E}[\tilde{\mathbf{y}}_k] &= 0\end{align}
where \operatorname{E}[\xi] is the expected value of \xi. That is, all estimates have a mean error of zero.
Also:
\begin{align}\mathbf{P}_{k\mid k} &= \operatorname{cov}\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right) \\ \mathbf{P}_{k\mid k-1} &= \operatorname{cov}\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k-1}\right) \\ \mathbf{S}_k &= \operatorname{cov}\left(\tilde{\mathbf{y}}_k\right)\end{align}
so covariance matrices accurately reflect the covariance of estimates.
After the covariances are set, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is white noise, so the whiteness property of the innovations measures filter performance. Several different methods can be used for this purpose; three optimality tests with numerical examples are described in the literature.
Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δt seconds, but these measurements are imprecise; we want to maintain a model of the truck's position and velocity. We show here how we derive the model from which we create our Kalman filter.
Since F, H, R, and Q are constant, their time indices are dropped.
The position and velocity of the truck are described by the linear state space

\mathbf{x}_k = \begin{bmatrix} x \\ \dot{x} \end{bmatrix}
where \dot{x} is the velocity, that is, the derivative of position with respect to time.
We assume that between the (k − 1) and k timestep, uncontrolled forces cause a constant acceleration a_k that is normally distributed with mean 0 and standard deviation σ_a. From Newton's laws of motion we conclude that

\mathbf{x}_k = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{G} a_k
(there is no \mathbf{B}\mathbf{u} term since there are no known control inputs; instead, a_k is the effect of an unknown input, and \mathbf{G} applies that effect to the state vector) where
\begin{align}\mathbf{F} &= \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix} \\[4pt] \mathbf{G} &= \begin{bmatrix} \frac{1}{2}{\Delta t}^2 \\[6pt] \Delta t \end{bmatrix}\end{align}
so that

\mathbf{x}_k = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{w}_k
where
\begin{align}\mathbf{w}_k &\sim N(0, \mathbf{Q}) \\ \mathbf{Q} &= \mathbf{G}\mathbf{G}^\textsf{T}\sigma_a^2 = \begin{bmatrix} \frac{1}{4}{\Delta t}^4 & \frac{1}{2}{\Delta t}^3 \\[6pt] \frac{1}{2}{\Delta t}^3 & {\Delta t}^2 \end{bmatrix}\sigma_a^2.\end{align}
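For concreteness, the truck's model matrices can be sketched as follows (the numeric values of Δt and σ_a are assumed for illustration, not taken from the text):

```python
import numpy as np

dt = 1.0        # timestep Δt (assumed value)
sigma_a = 0.5   # std. dev. of the random acceleration (assumed value)

F = np.array([[1.0, dt],
              [0.0, 1.0]])       # state transition: position += velocity * dt
G = np.array([[0.5 * dt**2],
              [dt]])             # maps a scalar acceleration into the state
Q = G @ G.T * sigma_a**2         # process noise covariance, Q = G Gᵀ σ_a²
```

Note that Q built this way has rank one, matching the remark below about the distribution being degenerate.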
The matrix \mathbf{Q} is not full rank (it is of rank one if \Delta t \neq 0). Hence, the distribution N(0, \mathbf{Q}) is not absolutely continuous and has no probability density function. Another way to express this, avoiding explicit degenerate distributions, is given by

\mathbf{x}_k = \mathbf{F} \mathbf{x}_{k-1} + \mathbf{G} a_k, \qquad a_k \sim N\left(0, \sigma_a^2\right).
At each timestep, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise v_k is also normally distributed, with mean 0 and standard deviation σ_z:

\mathbf{z}_k = \mathbf{H} \mathbf{x}_k + \mathbf{v}_k

where

\mathbf{H} = \begin{bmatrix} 1 & 0 \end{bmatrix}

and
\mathbf{R} = \mathrm{E}\left[\mathbf{v}_k \mathbf{v}_k^\textsf{T}\right] = \begin{bmatrix} \sigma_z^2 \end{bmatrix}
We know the initial starting state of the truck with perfect precision, so we initialize

\hat{\mathbf{x}}_{0\mid 0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}
and to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix:
\mathbf{P}_{0\mid 0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}
If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal:
\mathbf{P}_{0\mid 0} = \begin{bmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_{\dot{x}}^2 \end{bmatrix}
The filter will then prefer the information from the first measurements over the information already in the model.
A similar equation holds if we include a non-zero control input. Gain matrices \mathbf{K}_k and covariance matrices \mathbf{P}_{k\mid k} evolve independently of the measurements \mathbf{z}_k. From above, the four equations needed for updating the matrices are as follows:
\begin{align}\mathbf{P}_{k\mid k-1} &= \mathbf{F}_k \mathbf{P}_{k-1\mid k-1} \mathbf{F}_k^\textsf{T} + \mathbf{Q}_k, \\ \mathbf{S}_k &= \mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} + \mathbf{R}_k, \\ \mathbf{K}_k &= \mathbf{P}_{k\mid k-1}\mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1}, \\ \mathbf{P}_{k|k} &= \left(\mathbf{I} - \mathbf{K}_k \mathbf{H}_k\right) \mathbf{P}_{k|k-1}.\end{align}
Since these depend only on the model, and not the measurements, they may be computed offline. Convergence of the gain matrices \mathbf{K}_k to an asymptotic matrix \mathbf{K}_\infty applies for conditions established in Walrand and Dimakis. If the series converges, then it converges exponentially to an asymptotic gain \mathbf{K}_\infty, assuming non-zero plant noise (Kalman, Rudolf Emil, T. S. Englar, and Richard S. Bucy. Fundamental study of adaptive control systems. Clearinghouse, US Department of Commerce, 1962). Recent analysis has shown that the rate and nature of this convergence can involve multiple coequal modes, including oscillatory components, depending on the eigenstructure of the Jacobian of the above Riccati map evaluated at the asymptotic covariance. For the moving truck example described above, simulation shows convergence after a small number of iterations.
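The offline computation can be sketched by simply iterating the four matrix equations above, with no measurements involved, until the gain stops changing (a sketch, assuming NumPy; the function name and the convergence tolerance are illustrative):

```python
import numpy as np

def asymptotic_gain(F, H, Q, R, P0, tol=1e-9, max_iter=1000):
    """Iterate the measurement-free covariance/gain recursion
    until the gain stops changing, approximating K_infinity."""
    P = P0
    K_prev = None
    for _ in range(max_iter):
        P_pred = F @ P @ F.T + Q                 # predicted covariance
        S = H @ P_pred @ H.T + R                 # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)      # gain at this step
        P = (np.eye(P.shape[0]) - K @ H) @ P_pred
        if K_prev is not None and np.max(np.abs(K - K_prev)) < tol:
            return K
        K_prev = K
    return K
```

For the moving truck model (F, H, Q, R as above) the limit is independent of the initial covariance, so the same gain is reached from different starting points.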
Using the asymptotic gain, and assuming \mathbf{H}_k and \mathbf{F}_k are independent of k, the Kalman filter becomes a linear time-invariant filter:
The asymptotic gain \mathbf{K}_\infty, if it exists, can be computed by first solving the following discrete Riccati equation for the asymptotic state covariance \mathbf{P}_\infty:
The asymptotic gain is then computed as before.
Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by
where
This leads to an estimator of the form
substitute in the definition of \hat{\mathbf{x}}_{k\mid k}

and substitute \tilde{\mathbf{y}}_k

and \mathbf{z}_k
and by collecting the error vectors we get
Since the measurement error \mathbf{v}_k is uncorrelated with the other terms, this becomes
by the properties of vector covariance this becomes
which, using our invariant on \mathbf{P}_{k\mid k-1} and the definition of \mathbf{R}_k, becomes
This formula (sometimes known as the Joseph form of the covariance update equation) is valid for any value of \mathbf{K}_k. It turns out that if \mathbf{K}_k is the optimal Kalman gain, this can be simplified further as shown below.
We seek to minimize the expected value of the square of the magnitude of this vector, \operatorname{E}\left[\left\|\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right\|^2\right]. This is equivalent to minimizing the trace of the a posteriori estimate covariance matrix \mathbf{P}_{k\mid k}. By expanding out the terms in the equation above and collecting, we get:
The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved, we find that
Solving this for \mathbf{K}_k yields the Kalman gain:
This gain, which is known as the optimal Kalman gain, is the one that yields minimum mean-square error (MMSE) estimates when used.
Referring back to our expanded formula for the a posteriori error covariance,
we find the last two terms cancel out, giving
This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (Joseph form) must be used.
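The distinction can be sketched numerically (assuming NumPy; the function names are illustrative): the Joseph form is valid for any gain, while the shorter form agrees with it only at the optimal gain.

```python
import numpy as np

def joseph_update(P_pred, K, H, R):
    """Joseph-form covariance update: valid for ANY gain K,
    and guaranteed to produce a symmetric result."""
    I = np.eye(P_pred.shape[0])
    A = I - K @ H
    return A @ P_pred @ A.T + K @ R @ K.T

def simple_update(P_pred, K, H):
    """Cheaper form: correct only when K is the optimal Kalman gain."""
    I = np.eye(P_pred.shape[0])
    return (I - K @ H) @ P_pred
```

With the optimal gain the two forms return the same covariance; with any other gain only the Joseph form remains correct.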
If the design assumptions are violated, the covariance computed by the filter no longer provides the actual error covariance. In other words, \mathbf{P}_{k\mid k} \neq \operatorname{E}\left[\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)^\textsf{T}\right]. In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariance matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices \mathbf{F}_k and \mathbf{H}_k that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs to the estimator.
This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by \mathbf{Q}^a_k and \mathbf{R}^a_k respectively, whereas the design values used in the estimator are \mathbf{Q}_k and \mathbf{R}_k respectively. The actual error covariance is denoted by \mathbf{P}^a_{k\mid k}, and \mathbf{P}_{k\mid k} as computed by the Kalman filter is referred to as the Riccati variable. When \mathbf{Q}_k \equiv \mathbf{Q}^a_k and \mathbf{R}_k \equiv \mathbf{R}^a_k, this means that \mathbf{P}_{k\mid k} = \mathbf{P}^a_{k\mid k}. While computing the actual error covariance using \mathbf{P}^a_{k\mid k} = \operatorname{E}\left[\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)\left(\mathbf{x}_k - \hat{\mathbf{x}}_{k\mid k}\right)^\textsf{T}\right], substituting for \hat{\mathbf{x}}_{k\mid k} and using the fact that \operatorname{E}\left[\mathbf{w}_k \mathbf{w}_k^\textsf{T}\right] = \mathbf{Q}^a_k and \operatorname{E}\left[\mathbf{v}_k \mathbf{v}_k^\textsf{T}\right] = \mathbf{R}^a_k, results in the following recursive equations for \mathbf{P}^a_{k\mid k-1} and \mathbf{P}^a_{k\mid k}:
and
While computing \mathbf{P}_{k\mid k}, by design the filter implicitly assumes that \operatorname{E}\left[\mathbf{w}_k \mathbf{w}_k^\textsf{T}\right] = \mathbf{Q}_k and \operatorname{E}\left[\mathbf{v}_k \mathbf{v}_k^\textsf{T}\right] = \mathbf{R}_k. The recursive expressions for \mathbf{P}^a_{k\mid k} and \mathbf{P}_{k\mid k} are identical except for the presence of \mathbf{Q}^a_k and \mathbf{R}^a_k in place of the design values \mathbf{Q}_k and \mathbf{R}_k respectively. Research has been done to analyze the robustness of Kalman filter systems.
Positive definite matrices have the property that they can be factored into the product of a non-singular, lower-triangular matrix S and its transpose: P = S·Sᵀ. The factor S can be computed efficiently using the Cholesky factorization algorithm. This product form of the covariance matrix P is guaranteed to be symmetric, and for all 1 ≤ k ≤ n, the k-th diagonal element P_kk is equal to the squared Euclidean norm of the k-th row of S, which is necessarily positive. An equivalent form, which avoids many of the square root operations involved in the Cholesky factorization algorithm, yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·Uᵀ, where U is a unit triangular matrix (with unit diagonal) and D is a diagonal matrix.
Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used triangular factorization. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions, while on 21st-century computers they are only slightly more expensive.)
Efficient algorithms for the Kalman prediction and update steps in the factored form were developed by G. J. Bierman and C. L. Thornton.
The L·D·Lᵀ decomposition of the innovation covariance matrix S_k is the basis for another type of numerically efficient and robust square root filter.
In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM).
Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state.
Similarly, the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state.
Using these assumptions the probability distribution over all states of the hidden Markov model can be written simply as:
However, when a Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set.
This results in the predict and update phases of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k − 1)-th timestep to the k-th and the probability distribution associated with the previous state, over all possible \mathbf{x}_{k-1}.
The measurement set up to time t is

\mathbf{Z}_t = \left\{\mathbf{z}_1, \dots, \mathbf{z}_t\right\}
The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state.
The denominator
is a normalization term.
The remaining probability density functions are
The PDF at the previous timestep is assumed inductively to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements; therefore the PDF for \mathbf{x}_{k-1} given the measurements \mathbf{Z}_{k-1} is the Kalman filter estimate.
This process has identical structure to the hidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions.
In some applications, it is useful to compute the probability that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison.
It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the chain rule, the likelihood can be factored as the product of the probability of each observation given previous observations,
and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate \hat{\mathbf{x}}_{k\mid k-1}. Thus the marginal likelihood is given by
i.e., a product of Gaussian densities, each corresponding to the density of one observation \mathbf{z}_k under the current filtering distribution \mathcal{N}\left(\mathbf{H}_k \hat{\mathbf{x}}_{k\mid k-1}, \mathbf{S}_k\right). This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the log marginal likelihood instead. Adopting the convention that the log-likelihood is initialized to zero, this can be done via the recursive update rule
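Each recursion step adds the Gaussian log-density of the innovation under N(0, S_k) to the running total; a minimal sketch (assuming NumPy, with an illustrative function name):

```python
import numpy as np

def log_likelihood_update(log_lik, y, S):
    """Add one observation's contribution to the log marginal likelihood:
    the log-density of the innovation y under N(0, S)."""
    d = len(y)
    sign, logdet = np.linalg.slogdet(S)  # log|S|, computed stably
    return log_lik - 0.5 * (d * np.log(2.0 * np.pi) + logdet
                            + y @ np.linalg.solve(S, y))
```

Working in log space this way avoids the numeric underflow that a direct product of densities would suffer over long observation sequences.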
An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input, but it is unknown how many objects are in the scene (or the number of objects is known but is greater than one). In such a scenario, it can be unknown a priori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) typically will form different track association hypotheses, where each hypothesis can be considered as a Kalman filter (for the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most likely one can be found.
Similarly, the predicted covariance and state have equivalent information forms, defined as:
and the measurement covariance and measurement vector, which are defined as:
The information update now becomes a trivial sum.
The main advantage of the information filter is that N measurements can be filtered at each time step simply by summing their information matrices and vectors.
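That additive fusion can be sketched as follows (assuming NumPy; each sensor contributes Hᵀ R⁻¹ H to the information matrix and Hᵀ R⁻¹ z to the information vector; the function name is illustrative):

```python
import numpy as np

def information_update(Y, y, measurements):
    """Information-filter update: fuse N measurements by summation.
    Y: information matrix (inverse covariance), y: information vector.
    measurements: iterable of (H, R, z) triples, one per sensor."""
    for H, R, z in measurements:
        Rinv = np.linalg.inv(R)
        Y = Y + H.T @ Rinv @ H   # each sensor adds its information matrix
        y = y + H.T @ Rinv @ z   # ... and its information vector
    return Y, y
```

The state estimate is recovered afterwards as x = Y⁻¹ y, so the per-sensor work is a pure sum, which is what makes the information form attractive for multi-sensor and distributed settings.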
To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used.
where:
If the estimation error covariance is defined so that
then we have that the improvement on the state estimation is given by:
The forward pass is the same as the regular Kalman filter algorithm. These filtered a priori and a posteriori state estimates \hat{\mathbf{x}}_{k\mid k-1}, \hat{\mathbf{x}}_{k\mid k} and covariances \mathbf{P}_{k\mid k-1}, \mathbf{P}_{k\mid k} are saved for use in the backward pass (for retrodiction).
In the backward pass, we compute the smoothed state estimates \hat{\mathbf{x}}_{k\mid n} and covariances \mathbf{P}_{k\mid n}. We start at the last time step and proceed backward in time using the following recursive equations:

\begin{align}\hat{\mathbf{x}}_{k\mid n} &= \hat{\mathbf{x}}_{k\mid k} + \mathbf{C}_k \left(\hat{\mathbf{x}}_{k+1\mid n} - \hat{\mathbf{x}}_{k+1\mid k}\right) \\ \mathbf{P}_{k\mid n} &= \mathbf{P}_{k\mid k} + \mathbf{C}_k \left(\mathbf{P}_{k+1\mid n} - \mathbf{P}_{k+1\mid k}\right) \mathbf{C}_k^\textsf{T}\end{align}
where

\mathbf{C}_k = \mathbf{P}_{k\mid k} \mathbf{F}_{k+1}^\textsf{T} \mathbf{P}_{k+1\mid k}^{-1}.
\hat{\mathbf{x}}_{k\mid k} is the a posteriori state estimate of timestep k and \hat{\mathbf{x}}_{k+1\mid k} is the a priori state estimate of timestep k + 1. The same notation applies to the covariance.
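A sketch of the backward pass (assuming NumPy, a time-invariant F, and that the forward-pass a priori and a posteriori estimates were stored in lists; the smoother gain used is C_k = P_{k|k} Fᵀ P_{k+1|k}⁻¹, and the function name is illustrative):

```python
import numpy as np

def rts_smoother(x_filt, P_filt, x_pred, P_pred, F):
    """Rauch–Tung–Striebel backward pass.
    x_filt[k], P_filt[k]: a posteriori estimates from the forward pass.
    x_pred[k], P_pred[k]: a priori estimates from the forward pass
    (x_pred[k] is the prediction of step k made at step k-1)."""
    n = len(x_filt)
    x_smooth = [None] * n
    P_smooth = [None] * n
    x_smooth[-1], P_smooth[-1] = x_filt[-1], P_filt[-1]  # start at the end
    for k in range(n - 2, -1, -1):
        C = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])  # smoother gain
        x_smooth[k] = x_filt[k] + C @ (x_smooth[k + 1] - x_pred[k + 1])
        P_smooth[k] = P_filt[k] + C @ (P_smooth[k + 1] - P_pred[k + 1]) @ C.T
    return x_smooth, P_smooth
```

Because each smoothed estimate uses all the measurements, the smoothed covariances are never larger than the corresponding filtered covariances.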
The recursive equations are
where \mathbf{S}_k is the residual covariance and \hat{\mathbf{C}}_k = \mathbf{I} - \mathbf{K}_k \mathbf{H}_k. The smoothed state and covariance can then be found by substitution in the equations
or
An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix. Bierman's derivation is based on the RTS smoother, which assumes that the underlying distributions are Gaussian. However, a derivation of the MBF based on the concept of the fixed point smoother, which does not require the Gaussian assumption, is given by Gibbs.
The MBF can also be used to perform consistency checks on the filter residuals and on the difference between the value of a filter state after an update and the smoothed value of that state.
The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by
The above system is known as the inverse Wiener–Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass may be calculated by operating the forward equations on the time-reversed measurements and time reversing the result. In the case of output estimation, the smoothed estimate is given by
Taking the causal part of this minimum-variance smoother yields
which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly.
A continuous-time version of the above smoother is described in the literature.
Expectation–maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation.
In cases where the models are nonlinear, step-wise linearizations may be used within the minimum-variance filter and smoother recursions (extended Kalman filtering).
Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let \mathbf{e} denote the output estimation error exhibited by a conventional Kalman filter. Also, let \mathcal{W} denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of \mathcal{W}\mathbf{e} arises by simply constructing \mathcal{W}^{-1}\hat{\mathcal{H}}.
The design of \mathcal{W} remains an open question. One way of proceeding is to identify a system which generates the estimation error and set \mathcal{W} equal to the inverse of that system. This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers.
The most common variants of Kalman filters for non-linear systems are the extended Kalman filter and the unscented Kalman filter. Which filter is preferable depends on the non-linearity indices of the process and observation models.
The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian matrix) is computed.
At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate.
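As an illustration, one EKF step can be sketched in Python with NumPy; the two-state model, its Jacobians, and the noise levels below are invented for the example:

```python
import numpy as np

dt = 0.1  # illustrative sample time

def f(x):
    # Invented nonlinear transition: constant velocity with cubic drag.
    return np.array([x[0] + dt * x[1], x[1] - dt * 0.1 * x[1] ** 3])

def F_jac(x):
    # Jacobian of f evaluated at the current estimate.
    return np.array([[1.0, dt],
                     [0.0, 1.0 - dt * 0.3 * x[1] ** 2]])

def h(x):
    # Invented nonlinear measurement: a range-like observation.
    return np.array([np.sqrt(1.0 + x[0] ** 2)])

def H_jac(x):
    # Jacobian of h evaluated at the predicted state.
    return np.array([[x[0] / np.sqrt(1.0 + x[0] ** 2), 0.0]])

def ekf_step(x, P, z, Q, R):
    # Predict: the state goes through f, the covariance through the Jacobian.
    F = F_jac(x)
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: linearize h around the predicted state.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

x, P = ekf_step(np.array([0.0, 1.0]), np.eye(2),
                np.array([1.05]), 0.01 * np.eye(2), np.array([[0.04]]))
```

Note that only the covariance propagation uses the Jacobians; the state and measurement predictions use the full nonlinear functions.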
A simple choice of sigma points and weights for x̂_{k−1∣k−1} in the UKF algorithm is shown below.
The weight of the mean value, W_0, can be chosen arbitrarily.
Another popular parameterization (which generalizes the above) is
α and κ control the spread of the sigma points, while β is related to the distribution of x. Note that this is an overparameterization in the sense that any one of α, β and κ can be chosen arbitrarily.
Appropriate values depend on the problem at hand. If the true distribution of x is Gaussian, β = 2 is optimal.
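A sketch of sigma-point generation under the α, β, κ parameterization (Python with NumPy; the default parameter values and the example mean and covariance are invented, and the Cholesky factor is one valid choice of matrix square root):

```python
import numpy as np

def sigma_points(x_mean, P, alpha=1.0, kappa=3.0, beta=2.0):
    # alpha and kappa spread the points; beta encodes knowledge of the
    # distribution (beta = 2 is optimal for a Gaussian prior).
    L = len(x_mean)
    A = np.linalg.cholesky(P)          # one valid matrix square root of P
    s = np.zeros((2 * L + 1, L))
    Wa = np.zeros(2 * L + 1)
    Wc = np.zeros(2 * L + 1)
    s[0] = x_mean
    Wa[0] = (alpha ** 2 * kappa - L) / (alpha ** 2 * kappa)
    Wc[0] = Wa[0] + 1.0 - alpha ** 2 + beta
    for j in range(L):
        step = alpha * np.sqrt(kappa) * A[:, j]
        s[1 + j] = x_mean + step
        s[1 + L + j] = x_mean - step
        Wa[1 + j] = Wa[1 + L + j] = 1.0 / (2 * alpha ** 2 * kappa)
        Wc[1 + j] = Wc[1 + L + j] = Wa[1 + j]
    return s, Wa, Wc

s, Wa, Wc = sigma_points(np.array([0.0, 1.0]), np.eye(2))
```

By construction, the weighted first-order sum of the points reproduces the mean exactly, and the weighted second-order sum reproduces the covariance exactly.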
Given estimates of the mean and covariance, x̂_{k−1∣k−1} and P_{k−1∣k−1}, one obtains sigma points as described in the section above. The sigma points are propagated through the transition function f.
The propagated sigma points are weighted to produce the predicted mean and covariance.
Then the empirical mean and covariance of the transformed points are calculated.
The Kalman gain is
The updated mean and covariance estimates are
Under a stationary state model
It is based on the state space model
where Q(t) and R(t) represent the intensities of the two white noise terms w(t) and v(t), respectively.
The filter consists of two differential equations, one for the state estimate and one for the covariance:
where the Kalman gain is given by
Note that in this expression for the Kalman gain the covariance of the observation noise R(t) represents at the same time the covariance of the prediction error (or innovation) ỹ(t) = z(t) − H(t)x̂(t); these covariances are equal only in the case of continuous time.
The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time.
The second differential equation, for the covariance, is an example of a Riccati equation. Nonlinear generalizations to Kalman–Bucy filters include the continuous-time extended Kalman filter.
where
The prediction equations are derived from those of the continuous-time Kalman filter without update from measurements, i.e., K(t) = 0. The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step.
For the case of linear time invariant systems, the continuous time dynamics can be exactly discretized into a discrete time system using matrix exponentials.
The update equations are identical to those of the discrete-time Kalman filter.
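The exact discretization via matrix exponentials can be sketched with Van Loan's construction (Python with NumPy; the truncated-series `expm` is a self-contained stand-in for `scipy.linalg.expm`, and the constant-velocity model is invented for illustration):

```python
import numpy as np

def expm(M, terms=40):
    # Matrix exponential via scaling-and-squaring of a truncated Taylor
    # series (a library routine such as scipy.linalg.expm is preferable).
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1e-16)))))
    A = M / (2.0 ** s)
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ A / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def discretize(F, Qc, dt):
    # Van Loan's method: exact Ad and Qd for LTI dynamics with
    # continuous-time noise intensity Qc.
    n = len(F)
    M = np.block([[-F, Qc], [np.zeros((n, n)), F.T]]) * dt
    G = expm(M)
    Ad = G[n:, n:].T          # transpose of expm(F^T dt)
    Qd = Ad @ G[:n, n:]       # integrates e^{F t} Qc e^{F^T t} over [0, dt]
    return Ad, Qd

# Constant-velocity model driven by white acceleration noise (illustrative).
F = np.array([[0.0, 1.0], [0.0, 0.0]])
Qc = np.array([[0.0, 0.0], [0.0, 1.0]])
Ad, Qd = discretize(F, Qc, dt=0.5)
```

For this model the closed-form answer is Ad = [[1, dt], [0, 1]] and Qd = [[dt³/3, dt²/2], [dt²/2, dt]], which the construction reproduces.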
Derivations
Deriving the a posteriori estimate covariance matrix
Kalman gain derivation
\begin{align}
\mathbf{P}_{k\mid k} & = \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{K}_k^\textsf{T} + \mathbf{K}_k \left(\mathbf{H}_k \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} + \mathbf{R}_k\right) \mathbf{K}_k^\textsf{T} \\[6pt]
&= \mathbf{P}_{k\mid k-1} - \mathbf{K}_k \mathbf{H}_k \mathbf{P}_{k\mid k-1} - \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{K}_k^\textsf{T} + \mathbf{K}_k \mathbf{S}_k\mathbf{K}_k^\textsf{T}
\end{align}
\begin{align}
\mathbf{K}_k \mathbf{S}_k &= \left(\mathbf{H}_k \mathbf{P}_{k\mid k-1}\right)^\textsf{T} = \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \\
\Rightarrow \mathbf{K}_k &= \mathbf{P}_{k\mid k-1} \mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1}
\end{align}
Simplification of the a posteriori error covariance formula
Sensitivity analysis
"False information injection attack on dynamic state estimation in multi-sensor systems", Fusion 2014
Factored form
Parallel form
Relationship to recursive Bayesian estimation
\begin{align}
p\left(\mathbf{x}_k \mid \mathbf{x}_{k-1}\right) &= \mathcal{N}\left(\mathbf{F}_k\mathbf{x}_{k-1}, \mathbf{Q}_k\right) \\
p\left(\mathbf{z}_k \mid \mathbf{x}_k\right) &= \mathcal{N}\left(\mathbf{H}_k\mathbf{x}_k, \mathbf{R}_k\right) \\
p\left(\mathbf{x}_{k-1} \mid \mathbf{Z}_{k-1}\right) &= \mathcal{N}\left(\hat{\mathbf{x}}_{k-1}, \mathbf{P}_{k-1}\right)
\end{align}
Marginal likelihood
\begin{align}
p(\mathbf{z}) &= \prod_{k=0}^T \int p\left(\mathbf{z}_k \mid \mathbf{x}_k\right) p\left(\mathbf{x}_k \mid \mathbf{z}_{k-1}, \ldots,\mathbf{z}_0\right) d\mathbf{x}_k\\
&= \prod_{k=0}^T \int \mathcal{N}\left(\mathbf{z}_k; \mathbf{H}_k\mathbf{x}_k, \mathbf{R}_k\right) \mathcal{N}\left(\mathbf{x}_k; \hat{\mathbf{x}}_{k \mid k-1}, \mathbf{P}_{k \mid k-1}\right) d\mathbf{x}_k\\
&= \prod_{k=0}^T \mathcal{N}\left(\mathbf{z}_k; \mathbf{H}_k\hat{\mathbf{x}}_{k \mid k-1}, \mathbf{R}_k + \mathbf{H}_k \mathbf{P}_{k \mid k-1} \mathbf{H}_k^\textsf{T}\right)\\
&= \prod_{k=0}^T \mathcal{N}\left(\mathbf{z}_k; \mathbf{H}_k\hat{\mathbf{x}}_{k \mid k-1}, \mathbf{S}_k\right),
\end{align}
Taking the logarithm turns this product into a sum whose constant terms involve the dimension of the measurement vector.
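Concretely, the log marginal likelihood accumulates one Gaussian log-density per innovation during a single filter pass; a sketch (Python with NumPy, with an invented scalar model and the prior treated as the estimate before the first measurement):

```python
import numpy as np

def log_marginal_likelihood(zs, F, H, Q, R, x0, P0):
    # Accumulate log N(z_k; H x_pred, S_k) over the predictive distributions
    # produced by one forward pass of the Kalman filter.
    x, P = x0, P0
    ll = 0.0
    for z in zs:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Innovation statistics
        y = z - H @ x
        S = H @ P @ H.T + R
        d = len(z)
        ll += -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(S))
                      + y @ np.linalg.solve(S, y))
        # Update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
    return ll

F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.1]]); R = np.array([[0.5]])
zs = [np.array([0.2]), np.array([0.1]), np.array([-0.3])]
ll = log_marginal_likelihood(zs, F, H, Q, R, np.zeros(1), np.eye(1))
```

Indexing conventions vary (whether the first factor uses the prior or the first predictive distribution), but the recursive structure is the same.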
Information filter
\begin{align}
\mathbf{Y}_{k \mid k} &= \mathbf{P}_{k \mid k}^{-1} \\
\hat{\mathbf{y}}_{k \mid k} &= \mathbf{P}_{k \mid k}^{-1}\hat{\mathbf{x}}_{k \mid k}
\end{align}
\begin{align}
\mathbf{Y}_{k \mid k-1} &= \mathbf{P}_{k \mid k-1}^{-1} \\
\hat{\mathbf{y}}_{k \mid k-1} &= \mathbf{P}_{k \mid k-1}^{-1}\hat{\mathbf{x}}_{k \mid k-1}
\end{align}
\begin{align}
\mathbf{I}_k &= \mathbf{H}_k^\textsf{T} \mathbf{R}_k^{-1} \mathbf{H}_k \\
\mathbf{i}_k &= \mathbf{H}_k^\textsf{T} \mathbf{R}_k^{-1} \mathbf{z}_k
\end{align}
\begin{align}
\mathbf{Y}_{k \mid k} &= \mathbf{Y}_{k \mid k-1} + \mathbf{I}_k \\
\hat{\mathbf{y}}_{k \mid k} &= \hat{\mathbf{y}}_{k \mid k-1} + \mathbf{i}_k
\end{align}
\begin{align}
\mathbf{Y}_{k \mid k} &= \mathbf{Y}_{k \mid k-1} + \sum_{j=1}^N \mathbf{I}_{k,j} \\
\hat{\mathbf{y}}_{k \mid k} &= \hat{\mathbf{y}}_{k \mid k-1} + \sum_{j=1}^N \mathbf{i}_{k,j}
\end{align}
\begin{align}
\mathbf{M}_k &=
\left[\mathbf{F}_k^{-1}\right]^\textsf{T} \mathbf{Y}_{k-1 \mid k-1} \mathbf{F}_k^{-1} \\
\mathbf{C}_k &=
\mathbf{M}_k \left[\mathbf{M}_k + \mathbf{Q}_k^{-1}\right]^{-1} \\
\mathbf{L}_k &=
\mathbf{I} - \mathbf{C}_k \\
\mathbf{Y}_{k \mid k-1} &=
\mathbf{L}_k \mathbf{M}_k \mathbf{L}_k^\textsf{T} +
\mathbf{C}_k \mathbf{Q}_k^{-1} \mathbf{C}_k^\textsf{T} \\
\hat{\mathbf{y}}_{k \mid k-1} &=
\mathbf{L}_k \left[\mathbf{F}_k^{-1}\right]^\textsf{T} \hat{\mathbf{y}}_{k-1 \mid k-1}
\end{align}
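The update and prediction equations above can be sketched as follows (Python with NumPy; the matrices in the usage example are invented, and the prediction uses the symmetric form L M Lᵀ + C Q⁻¹ Cᵀ):

```python
import numpy as np

def information_update(Y, y, H, Rinv, z):
    # Measurement update: add one sensor's information contribution.
    return Y + H.T @ Rinv @ H, y + H.T @ Rinv @ z

def information_predict(Y, y, F, Qinv):
    # Prediction step in information form (M, C, L as in the equations above).
    Finv = np.linalg.inv(F)
    M = Finv.T @ Y @ Finv
    C = M @ np.linalg.inv(M + Qinv)
    L = np.eye(len(Y)) - C
    Y_pred = L @ M @ L.T + C @ Qinv @ C.T
    y_pred = L @ Finv.T @ y
    return Y_pred, y_pred

# Invented example: convert a covariance-form estimate, predict, and update.
P0 = np.array([[2.0, 0.3], [0.3, 1.0]])
x0 = np.array([1.0, -1.0])
F = np.array([[1.0, 0.1], [0.0, 1.0]])
Q = 0.2 * np.eye(2)
Y0 = np.linalg.inv(P0)
Yp, yp = information_predict(Y0, Y0 @ x0, F, np.linalg.inv(Q))
H = np.array([[1.0, 0.0]])
Y1, y1 = information_update(Yp, yp, H, np.array([[10.0]]), np.array([0.9]))
```

Recovering x̂ = Y⁻¹ŷ and P = Y⁻¹ at any point reproduces the covariance-form Kalman filter, which makes the information form convenient for fusing many sensors by simple addition.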
Fixed-lag smoother
\begin{bmatrix}
\hat{\mathbf{x}}_{t \mid t} \\
\hat{\mathbf{x}}_{t-1 \mid t} \\
\vdots \\
\hat{\mathbf{x}}_{t-N+1 \mid t} \\
\end{bmatrix}
=
\begin{bmatrix}
\mathbf{I} \\
0 \\
\vdots \\
0 \\
\end{bmatrix}
\hat{\mathbf{x}}_{t \mid t-1}
+
\begin{bmatrix}
0 & \ldots & 0 \\
\mathbf{I} & 0 & \vdots \\
\vdots & \ddots & \vdots \\
0 & \ldots & \mathbf{I} \\
\end{bmatrix}
\begin{bmatrix}
\hat{\mathbf{x}}_{t-1 \mid t-1} \\
\hat{\mathbf{x}}_{t-2 \mid t-1} \\
\vdots \\
\hat{\mathbf{x}}_{t-N+1 \mid t-1} \\
\end{bmatrix}
+
\begin{bmatrix}
\mathbf{K}^{(0)} \\
\mathbf{K}^{(1)} \\
\vdots \\
\mathbf{K}^{(N-1)} \\
\end{bmatrix}
\mathbf{y}_{t \mid t-1}
\mathbf{K}^{(i+1)} =
\mathbf{P}^{(i)} \mathbf{H}^\textsf{T}
\left[
\mathbf{H} \mathbf{P} \mathbf{H}^\textsf{T} + \mathbf{R}
\right]^{-1}
\mathbf{P}^{(i)} =
\mathbf{P}
\left[
\left(
\mathbf{F} - \mathbf{K} \mathbf{H}
\right)^\textsf{T}
\right]^i
\mathbf{P}_i :=
E \left[
\left(
\mathbf{x}_{t-i} - \hat{\mathbf{x}}_{t-i \mid t}
\right)^{*}
\left(
\mathbf{x}_{t-i} - \hat{\mathbf{x}}_{t-i \mid t}
\right)
\mid
z_1 \ldots z_t
\right],
\mathbf{P} - \mathbf{P}_i =
\sum_{j = 0}^i
\left[
\mathbf{P}^{(j)} \mathbf{H}^\textsf{T}
\left(
\mathbf{H} \mathbf{P} \mathbf{H}^\textsf{T} + \mathbf{R}
\right)^{-1}
\mathbf{H} \left( \mathbf{P}^{(i)} \right)^\textsf{T}
\right]
Fixed-interval smoothers
Rauch–Tung–Striebel
\begin{align}
\hat{\mathbf{x}}_{k \mid n} &= \hat{\mathbf{x}}_{k \mid k} + \mathbf{C}_k \left(\hat{\mathbf{x}}_{k+1 \mid n} - \hat{\mathbf{x}}_{k+1 \mid k}\right) \\
\mathbf{P}_{k \mid n} &= \mathbf{P}_{k \mid k} + \mathbf{C}_k \left(\mathbf{P}_{k+1 \mid n} - \mathbf{P}_{k+1 \mid k}\right) \mathbf{C}_k^\textsf{T}
\end{align}
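A sketch of the backward pass (Python with NumPy), assuming the standard smoother gain C_k = P_{k|k} Fᵀ P_{k+1|k}⁻¹ and a small forward filter to generate the stored quantities; the model constants are invented:

```python
import numpy as np

def kalman_forward(zs, F, H, Q, R, x0, P0):
    # Standard forward filter, storing predicted and filtered moments.
    xs_f, Ps_f, xs_p, Ps_p = [], [], [], []
    x, P = x0, P0
    I = np.eye(len(x0))
    for z in zs:
        xp = F @ x
        Pp = F @ P @ F.T + Q
        S = H @ Pp @ H.T + R
        K = Pp @ H.T @ np.linalg.inv(S)
        x = xp + K @ (z - H @ xp)
        P = (I - K @ H) @ Pp
        xs_p.append(xp); Ps_p.append(Pp)
        xs_f.append(x); Ps_f.append(P)
    return xs_f, Ps_f, xs_p, Ps_p

def rts_smoother(xs_f, Ps_f, xs_p, Ps_p, F):
    # Backward pass with the assumed gain C_k = P_{k|k} F^T P_{k+1|k}^{-1}.
    n = len(xs_f)
    xs, Ps = list(xs_f), list(Ps_f)
    for k in range(n - 2, -1, -1):
        C = Ps_f[k] @ F.T @ np.linalg.inv(Ps_p[k + 1])
        xs[k] = xs_f[k] + C @ (xs[k + 1] - xs_p[k + 1])
        Ps[k] = Ps_f[k] + C @ (Ps[k + 1] - Ps_p[k + 1]) @ C.T
    return xs, Ps

F = np.array([[1.0, 0.1], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.1]])
zs = [np.array([0.1]), np.array([0.2]), np.array([0.35]), np.array([0.4])]
xf, Pf, xp, Pp = kalman_forward(zs, F, H, Q, R, np.zeros(2), np.eye(2))
xs, Ps = rts_smoother(xf, Pf, xp, Pp, F)
```

The smoothed covariance is never larger than the filtered one, and the last smoothed estimate coincides with the last filtered estimate.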
Modified Bryson–Frazier smoother
\begin{align}
\tilde{\Lambda}_k &= \mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1} \mathbf{H}_k + \hat{\mathbf{C}}_k^\textsf{T} \hat{\Lambda}_k \hat{\mathbf{C}}_k \\
\hat{\Lambda}_{k-1} &= \mathbf{F}_k^\textsf{T}\tilde{\Lambda}_k\mathbf{F}_k \\
\hat{\Lambda}_n &= 0 \\
\tilde{\lambda}_k &= -\mathbf{H}_k^\textsf{T} \mathbf{S}_k^{-1} \mathbf{y}_k + \hat{\mathbf{C}}_k^\textsf{T} \hat{\lambda}_k \\
\hat{\lambda}_{k-1} &= \mathbf{F}_k^\textsf{T}\tilde{\lambda}_k \\
\hat{\lambda}_n &= 0
\end{align}
\begin{align}
\mathbf{P}_{k \mid n} &= \mathbf{P}_{k \mid k} - \mathbf{P}_{k \mid k}\hat{\Lambda}_k\mathbf{P}_{k \mid k} \\
\mathbf{x}_{k \mid n} &= \mathbf{x}_{k \mid k} - \mathbf{P}_{k \mid k}\hat{\lambda}_k
\end{align}
\begin{align}
\mathbf{P}_{k \mid n} &= \mathbf{P}_{k \mid k-1} - \mathbf{P}_{k \mid k-1}\tilde{\Lambda}_k\mathbf{P}_{k \mid k-1} \\
\mathbf{x}_{k \mid n} &= \mathbf{x}_{k \mid k-1} - \mathbf{P}_{k \mid k-1}\tilde{\lambda}_k.
\end{align}
Minimum-variance smoother
\begin{align}
\hat{\mathbf{x}}_{k+1 \mid k} &= (\mathbf{F}_k - \mathbf{K}_k\mathbf{H}_k)\hat{\mathbf{x}}_{k \mid k-1} + \mathbf{K}_k\mathbf{z}_k \\
\alpha_k &= -\mathbf{S}_k^{-\frac{1}{2}}\mathbf{H}_k\hat{\mathbf{x}}_{k \mid k-1} + \mathbf{S}_k^{-\frac{1}{2}}\mathbf{z}_k
\end{align}
Frequency-weighted Kalman filters
Nonlinear filters
Extended Kalman filter
\begin{align}
\mathbf{x}_k &= f(\mathbf{x}_{k-1}, \mathbf{u}_k) + \mathbf{w}_k \\
\mathbf{z}_k &= h(\mathbf{x}_k) + \mathbf{v}_k
\end{align}
Unscented Kalman filter
Sigma points
\begin{align}
\mathbf{s}_0 &= \hat{\mathbf{x}}_{k-1\mid k-1} \\
W_0^a &= W_0^c = W_0, \qquad -1 < W_0 < 1 \\
\mathbf{s}_j &= \hat{\mathbf{x}}_{k-1\mid k-1} + \sqrt{\frac{L}{1 - W_0}}\,\mathbf{A}_j, \quad j = 1, \dots, L \\
\mathbf{s}_{L+j} &= \hat{\mathbf{x}}_{k-1\mid k-1} - \sqrt{\frac{L}{1 - W_0}}\,\mathbf{A}_j, \quad j = 1, \dots, L \\
W_j^a &= W_j^c = \frac{1 - W_0}{2L}, \quad j = 1, \dots, 2L
\end{align}
where \mathbf{A}_j is the j-th column of a matrix square root of \mathbf{P}_{k-1\mid k-1}. For the parameterization in terms of \alpha, \beta and \kappa, the sigma points and weights become
\begin{align}
\mathbf{s}_0 &= \hat{\mathbf{x}}_{k-1\mid k-1} \\
W_0^a &= \frac{\alpha^2\kappa - L}{\alpha^2\kappa} \\
W_0^c &= W_0^a + 1 - \alpha^2 + \beta \\
\mathbf{s}_j &= \hat{\mathbf{x}}_{k-1\mid k-1} + \alpha\sqrt{\kappa}\,\mathbf{A}_j, \quad j = 1, \dots, L \\
\mathbf{s}_{L+j} &= \hat{\mathbf{x}}_{k-1\mid k-1} - \alpha\sqrt{\kappa}\,\mathbf{A}_j, \quad j = 1, \dots, L \\
W_j^a &= W_j^c = \frac{1}{2\alpha^2\kappa}, \quad j = 1, \dots, 2L.
\end{align}
Predict
\begin{align}
\hat{\mathbf{x}}_{k \mid k-1} &= \sum_{j=0}^{2L} W_j^a \mathbf{x}_j \\
\mathbf{P}_{k \mid k-1} &= \sum_{j=0}^{2L} W_j^c \left(\mathbf{x}_j - \hat{\mathbf{x}}_{k \mid k-1}\right)\left(\mathbf{x}_j - \hat{\mathbf{x}}_{k \mid k-1}\right)^\textsf{T}+\mathbf{Q}_k
\end{align}
where the W_j^a are the first-order weights of the original sigma points, and the W_j^c are the second-order weights. The matrix \mathbf{Q}_k is the covariance of the transition noise, \mathbf{w}_k.
Update
\begin{align}
\hat{\mathbf{z}} &= \sum_{j=0}^{2L} W_j^a \mathbf{z}_j \\[6pt]
\hat{\mathbf{S}}_k &= \sum_{j=0}^{2L} W_j^c (\mathbf{z}_j-\hat{\mathbf{z}})(\mathbf{z}_j-\hat{\mathbf{z}})^\textsf{T} + \mathbf{R_k}
\end{align}
where \mathbf{R}_k is the covariance matrix of the observation noise, \mathbf{v}_k. Additionally, the cross covariance matrix \mathbf{C_{xz}} is also needed
\begin{align}
\mathbf{C_{xz}} &= \sum_{j=0}^{2L} W_j^c (\mathbf{x}_j-\hat\mathbf{x}_{k|k-1})(\mathbf{z}_j-\hat\mathbf{z})^\textsf{T}.
\end{align}
\begin{align}
\mathbf{K}_k=\mathbf{C_{xz}}\hat{\mathbf{S}}_k^{-1}.
\end{align}
\begin{align}
\hat\mathbf{x}_{k\mid k}&=\hat\mathbf{x}_{k|k-1}+\mathbf{K}_k(\mathbf{z}_k-\hat\mathbf{z})\\
\mathbf{P}_{k\mid k}&=\mathbf{P}_{k\mid k-1}-\mathbf{K}_k\hat{\mathbf{S}}_k\mathbf{K}_k^\textsf{T}.
\end{align}
Discriminative Kalman filter
p(\mathbf{z}_k\mid\mathbf{x}_k) \approx
\frac{p(\mathbf{x}_k\mid\mathbf{z}_k)}{p(\mathbf{x}_k)}
where p(\mathbf{x}_k\mid\mathbf{z}_k) \approx \mathcal{N}(g(\mathbf{z}_k), Q(\mathbf{z}_k)) for nonlinear functions g and Q. This replaces the generative specification of the standard Kalman filter with a discriminative model for the latent states given observations.
\begin{align}
p(\mathbf{x}_1) &= \mathcal{N}(0, \mathbf{T}), \\
p(\mathbf{x}_k\mid\mathbf{x}_{k-1}) &=
\mathcal{N}(\mathbf{F}\mathbf{x}_{k-1}, \mathbf{C}),
\end{align}
Then, if
p(\mathbf{x}_k\mid\mathbf{z}_{1:k}) \approx \mathcal{N}(\hat{\mathbf{x}}_{k|k-1}, \mathbf{P}_{k|k-1}),
then given a new observation , it follows that
p(\mathbf{x}_{k+1}\mid\mathbf{z}_{1:k+1}) \approx \mathcal{N}(\hat{ \mathbf{x}}_{k+1|k}, \mathbf{P}_{k+1|k})
where
\begin{align}
\mathbf{M}_{k+1} &= \mathbf{F}\mathbf{P}_{k|k-1}\mathbf{F}^\intercal + \mathbf{C}, \\
\mathbf{P}_{k+1|k} &= (\mathbf{M}_{k+1}^{-1} + Q(\mathbf{z}_k)^{-1} - \mathbf{T}^{-1})^{-1}, \\
\hat{\mathbf{x}}_{k+1|k} &= \mathbf{P}_{k+1|k} (\mathbf{M}_{k+1}^{-1}\mathbf{F}\hat{\mathbf{x}}_{k|k-1} + Q(\mathbf{z}_k)^{-1}g(\mathbf{z}_k) ).
\end{align}
Note that this approximation requires \mathbf{M}_{k+1}^{-1} + Q(\mathbf{z}_k)^{-1} - \mathbf{T}^{-1} to be positive-definite; in the case that it is not,
\mathbf{P}_{k+1|k} = (\mathbf{M}_{k+1}^{-1} + Q(\mathbf{z}_k)^{-1})^{-1}
is used instead. Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states, and it can be used to build filters that are particularly robust to nonstationarities in the observation model.
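The recursion can be sketched as follows (Python with NumPy; the discriminative functions g and Q and all constants are invented stand-ins for a learned model, and the mean update uses the information form x̂ = P(M⁻¹Fx̂ + Q⁻¹g(z))):

```python
import numpy as np

def dkf_step(x_prev, P_prev, z, F, C, T, g, Qfun):
    # One step of the discriminative Kalman filter recursion.
    M = F @ P_prev @ F.T + C
    Minv = np.linalg.inv(M)
    Qinv = np.linalg.inv(Qfun(z))
    Pinv = Minv + Qinv - np.linalg.inv(T)
    if np.any(np.linalg.eigvalsh(Pinv) <= 0):
        # Fall back to dropping the T^{-1} term when positive-definiteness fails.
        Pinv = Minv + Qinv
    P = np.linalg.inv(Pinv)
    x = P @ (Minv @ (F @ x_prev) + Qinv @ g(z))
    return x, P

# Invented stationary AR(1)-style model, chosen so that T = F T F^T + C.
F = 0.9 * np.eye(2)
C = 0.19 * np.eye(2)
T = np.eye(2)
g = lambda z: z                      # stand-in discriminative mean
Qfun = lambda z: 0.5 * np.eye(2)     # stand-in discriminative covariance
x, P = dkf_step(np.zeros(2), np.eye(2), np.array([0.2, -0.1]), F, C, T, g, Qfun)
```

In practice g and Q would be regression models fit to pairs of states and observations rather than the closed-form stand-ins used here.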
Adaptive Kalman filter
Kalman–Bucy filter
\begin{align}
\frac{d}{dt}\mathbf{x}(t) &= \mathbf{F}(t)\mathbf{x}(t) + \mathbf{B}(t)\mathbf{u}(t) + \mathbf{w}(t) \\
\mathbf{z}(t) &= \mathbf{H}(t) \mathbf{x}(t) + \mathbf{v}(t)
\end{align}
\begin{align}
\frac{d}{dt}\hat{\mathbf{x}}(t) &= \mathbf{F}(t)\hat{\mathbf{x}}(t) + \mathbf{B}(t)\mathbf{u}(t) + \mathbf{K}(t) \left(\mathbf{z}(t) - \mathbf{H}(t)\hat{\mathbf{x}}(t)\right) \\
\frac{d}{dt}\mathbf{P}(t) &= \mathbf{F}(t)\mathbf{P}(t) + \mathbf{P}(t)\mathbf{F}^\textsf{T}(t) + \mathbf{Q}(t) - \mathbf{K}(t)\mathbf{R}(t)\mathbf{K}^\textsf{T}(t)
\end{align}
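A minimal sketch integrating the two differential equations with forward Euler (Python with NumPy), assuming the standard gain K(t) = P(t)Hᵀ(t)R⁻¹(t); the scalar model, constant measurement, and step size are invented:

```python
import numpy as np

def kalman_bucy(ts, z_of_t, F, H, Q, R, x0, P0):
    # Forward-Euler integration of the state and covariance ODEs
    # (a real implementation would use a proper stiff ODE integrator).
    x, P = x0.copy(), P0.copy()
    Rinv = np.linalg.inv(R)
    for i in range(len(ts) - 1):
        dt = ts[i + 1] - ts[i]
        K = P @ H.T @ Rinv                      # assumed gain K = P H^T R^{-1}
        x = x + dt * (F @ x + K @ (z_of_t(ts[i]) - H @ x))
        P = P + dt * (F @ P + P @ F.T + Q - K @ R @ K.T)
    return x, P

ts = np.linspace(0.0, 5.0, 501)
x_end, P_end = kalman_bucy(
    ts, lambda t: np.array([1.0]),
    F=np.array([[-1.0]]), H=np.array([[1.0]]),
    Q=np.array([[0.1]]), R=np.array([[0.1]]),
    x0=np.array([0.0]), P0=np.array([[1.0]]))
```

For this stable scalar model the covariance settles to the positive root of the algebraic Riccati equation, and the state estimate settles between the prior (0) and the constant measurement (1).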
Hybrid Kalman filter
\begin{align}
\dot{\mathbf{x}}(t) &= \mathbf{F}(t)\mathbf{x}(t) + \mathbf{B}(t)\mathbf{u}(t) + \mathbf{w}(t),
&\mathbf{w}(t) &\sim N\left(\mathbf{0}, \mathbf{Q}(t)\right) \\
\mathbf{z}_k &= \mathbf{H}_k\mathbf{x}_k + \mathbf{v}_k,
&\mathbf{v}_k &\sim N(\mathbf{0},\mathbf{R}_k)
\end{align}
Initialize
Predict
\begin{align}
\dot{\hat{\mathbf{x}}}(t) &= \mathbf{F}(t) \hat{\mathbf{x}}(t) + \mathbf{B}(t) \mathbf{u}(t)
\text{, with } \hat{\mathbf{x}}\left(t_{k-1}\right) = \hat{\mathbf{x}}_{k-1 \mid k-1} \\
\Rightarrow \hat{\mathbf{x}}_{k \mid k-1} &= \hat{\mathbf{x}}\left(t_k\right) \\
\dot{\mathbf{P}}(t) &= \mathbf{F}(t)\mathbf{P}(t) + \mathbf{P}(t)\mathbf{F}(t)^\textsf{T} + \mathbf{Q}(t)
\text{, with } \mathbf{P}\left(t_{k-1}\right) = \mathbf{P}_{k-1 \mid k-1} \\
\Rightarrow \mathbf{P}_{k \mid k-1} &= \mathbf{P}\left(t_k\right)
\end{align}
Update
\begin{align}
\mathbf{K}_k &= \mathbf{P}_{k \mid k-1}\mathbf{H}_k^\textsf{T} \left(\mathbf{H}_k\mathbf{P}_{k \mid k-1}\mathbf{H}_k^\textsf{T} + \mathbf{R}_k\right)^{-1} \\
\hat{\mathbf{x}}_{k \mid k} &= \hat{\mathbf{x}}_{k \mid k-1} + \mathbf{K}_k\left(\mathbf{z}_k - \mathbf{H}_k\hat{\mathbf{x}}_{k \mid k-1}\right) \\
\mathbf{P}_{k \mid k} &= \left(\mathbf{I} - \mathbf{K}_k\mathbf{H}_k\right)\mathbf{P}_{k \mid k-1}
\end{align}
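The hybrid predict/update cycle can be sketched as (Python with NumPy; forward-Euler integration over substeps stands in for a proper ODE solver, and the constant-velocity model with its constants is invented):

```python
import numpy as np

def hybrid_predict(x, P, F, B, u, Q, dt, substeps=100):
    # Integrate the deterministic state ODE and the covariance ODE
    # between sampling instants (forward Euler over substeps).
    h = dt / substeps
    for _ in range(substeps):
        x = x + h * (F @ x + B @ u)
        P = P + h * (F @ P + P @ F.T + Q)
    return x, P

def discrete_update(x, P, z, H, R):
    # Standard discrete-time measurement update, identical to the
    # discrete Kalman filter.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ (z - H @ x), (np.eye(len(x)) - K @ H) @ P

F = np.array([[0.0, 1.0], [0.0, 0.0]])   # continuous-time dynamics
B = np.array([[0.0], [1.0]])
u = np.array([0.0])
Qc = 0.01 * np.eye(2)                     # continuous noise intensity
x_pred, P_pred = hybrid_predict(np.array([0.0, 1.0]), np.eye(2), F, B, u, Qc, dt=0.5)
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
x_new, P_new = discrete_update(x_pred, P_pred, np.array([0.55]), H, R)
```

Between samples only the deterministic dynamics and the covariance are propagated; the measurement enters solely through the discrete update.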
Variants for the recovery of sparse signals
Relation to Gaussian processes
Applications
See also
Further reading
External links